Convolutional Neural Networks

Project: Write an Algorithm for a Dog Identification App


In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with '(IMPLEMENTATION)' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!

Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the Jupyter Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.

In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.

The rubric contains optional "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. If you decide to pursue the "Stand Out Suggestions", you should include the code in this Jupyter notebook.


Why We're Here

In this notebook, you will make the first steps towards developing an algorithm that could be used as part of a mobile or web app. At the end of this project, your code will accept any user-supplied image as input. If a dog is detected in the image, it will provide an estimate of the dog's breed. If a human is detected, it will provide an estimate of the dog breed that is most resembling. The image below displays potential sample output of your finished project (... but we expect that each student's algorithm will behave differently!).

Sample Dog Output

In this real-world setting, you will need to piece together a series of models to perform different tasks; for instance, the algorithm that detects humans in an image will be different from the CNN that infers dog breed. There are many points of possible failure, and no perfect algorithm exists. Your imperfect solution will nonetheless create a fun user experience!

The Road Ahead

We break the notebook into separate steps. Feel free to use the links below to navigate the notebook.

  • Step 0: Import Datasets
  • Step 1: Detect Humans
  • Step 2: Detect Dogs
  • Step 3: Create a CNN to Classify Dog Breeds (from Scratch)
  • Step 4: Create a CNN to Classify Dog Breeds (using Transfer Learning)
  • Step 5: Write your Algorithm
  • Step 6: Test Your Algorithm

Step 0: Import Datasets

Make sure that you've downloaded the required human and dog datasets:

  • Download the dog dataset. Unzip the folder and place it in this project's home directory, at the location /dogImages.

  • Download the human dataset. Unzip the folder and place it in the home directory, at location /lfw.

Note: If you are using a Windows machine, you are encouraged to use 7zip to extract the folder.

In the code cell below, we save the file paths for both the human (LFW) dataset and dog dataset in the numpy arrays human_files and dog_files.

In [1]:
import numpy as np
from glob import glob

# load filenames for human and dog images
human_files = np.array(glob("lfw/*/*"))
dog_files = np.array(glob("dogImages/*/*/*"))

# print number of images in each dataset
print('There are %d total human images available.' % len(human_files))
print('There are %d total dog images available.' % len(dog_files))
There are 13233 total human images available.
There are 8351 total dog images available.

Step 1: Detect Humans

In this section, we use OpenCV's implementation of Haar feature-based cascade classifiers to detect human faces in images.

OpenCV provides many pre-trained face detectors, stored as XML files on github. We have downloaded one of these detectors and stored it in the haarcascades directory. In the next code cell, we demonstrate how to use this detector to find human faces in a sample image.

In [2]:
import cv2                
import matplotlib.pyplot as plt                        
%matplotlib inline                               

# extract pre-trained face detector
face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')

# load color (BGR) image
img = cv2.imread(human_files[0])
# convert BGR image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# find faces in image
faces = face_cascade.detectMultiScale(gray)

# print number of faces detected in the image
print('Number of faces detected using cv2 haar cascade classifier:', len(faces))

# get bounding box for each detected face
for (x,y,w,h) in faces:
    # add bounding box to color image
    cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
    
# convert BGR image to RGB for plotting
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# display the image, along with bounding box
plt.imshow(cv_rgb)
plt.show()
Number of faces detected using cv2 haar cascade classifier: 1

Before using any of the face detectors, it is standard procedure to convert the images to grayscale. The detectMultiScale function executes the classifier stored in face_cascade and takes the grayscale image as a parameter.

In the above code, faces is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as x and y) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as w and h) specify the width and height of the box.

Write a Human Face Detector

We can use this procedure to write a function that returns True if a human face is detected in an image and False otherwise. This function, aptly named face_detector, takes a string-valued file path to an image as input and appears in the code block below.

In [3]:
# returns "True" if face is detected in image stored at img_path
def face_detector(img_path):
    img = cv2.imread(img_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray)
    return len(faces) > 0

(IMPLEMENTATION) Assess the Human Face Detector

Question 1: Use the code cell below to test the performance of the face_detector function.

  • What percentage of the first 100 images in human_files have a detected human face?
  • What percentage of the first 100 images in dog_files have a detected human face?

Ideally, we would like 100% of human images with a detected face and 0% of dog images with a detected face. You will see that our algorithm falls short of this goal, but still gives acceptable performance. We extract the file paths for the first 100 images from each of the datasets and store them in the numpy arrays human_files_short and dog_files_short.

Answer: (You can print out your results and/or write your percentages in this cell)

In [4]:
from tqdm import tqdm

# Pick test samples from human and dog files
human_files_short = human_files[:100]
dog_files_short = dog_files[:100]

#-#-# Do NOT modify the code above this line. #-#-#

## TODO: Test the performance of the face_detector algorithm 
## on the images in human_files_short and dog_files_short.

# Evaluation function to test the face detector on a given file list
def eval_face_detector(file_list):
    face_detection_count = 0
    number_of_images = len(file_list)
    for file in file_list:
        face_detection_count += face_detector(file)
    return face_detection_count, number_of_images
In [5]:
# Evaluate the face detector on human files
face_detections_in_human_files, num_of_human_images = eval_face_detector(human_files_short)

# Evaluate the face detector on dog files
face_detections_in_dog_files, num_of_dog_images = eval_face_detector(dog_files_short)
In [6]:
# Percentage of (likely true positive) human detections in human files
if num_of_human_images:
    print('Percentage of images with human detections in human_files_short: %d / %d resp. %3.1f %%\n' % 
          (face_detections_in_human_files, 
           num_of_human_images, 
           face_detections_in_human_files/num_of_human_images*100))
else:
    print('No human sample images selected.')

# Percentage of (most likely false positive) human detections in dog files
if num_of_dog_images:
    print('Percentage of images with human detections in dog_files_short: %d / %d resp. %3.1f %%\n' % 
          (face_detections_in_dog_files, 
           num_of_dog_images, 
           face_detections_in_dog_files/num_of_dog_images*100))
else:
    print('No dog sample images selected.')
Percentage of images with human detections in human_files_short: 97 / 100 resp. 97.0 %

Percentage of images with human detections in dog_files_short: 9 / 100 resp. 9.0 %

Answers to Question 1:

What percentage of the first 100 images in human_files have a detected human face? => 97.0 %
What percentage of the first 100 images in dog_files have a detected human face? => 9.0 %

We suggest the face detector from OpenCV as a potential way to detect human images in your algorithm, but you are free to explore other approaches, especially approaches that make use of deep learning :). Please use the code cell below to design and test your own face detection algorithm. If you decide to pursue this optional task, report performance on human_files_short and dog_files_short.

In [7]:
import operator
In [8]:
### (Optional) 
### TODO: Test performance of another face detection algorithm.
### Feel free to use as many code cells as needed.
def face_bb_detector(img_path, debug_mode=False):
    # Read image stored at img_path (default: BGR format)
    img_bgr = cv2.imread(img_path)
    # Convert BGR color to GRAY scale image
    img_gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    # Detect faces using haar cascade classifier
    faces = face_cascade.detectMultiScale(img_gray)
    # Face detection counter
    face_count = 0
    # Get the bounding box for each detected face
    for (x,y,w,h) in faces:
        # Increment face counter
        face_count += 1
        # Add bounding box to color image
        cv2.rectangle(img_bgr,(x,y),(x+w,y+h),(255,0,0),2)
    if debug_mode:
        # Print number of faces detected in the image
        print('Number of faces detected in image:', len(faces))
        # Convert BGR image to RGB for plotting
        img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
        # Display the image, along with bounding box
        plt.imshow(img_rgb)
        plt.show()
    # Return the detected bounding boxes in the image
    return faces, face_count
In [9]:
# Select index
index = 99

# Sample human_files data set
face_bb_detector(human_files[index], debug_mode=True)

# Sample dog_files data set
face_bb_detector(dog_files[index], debug_mode=True)
Number of faces detected in image: 1
Number of faces detected in image: 1
Out[9]:
(array([[129, 125,  56,  56]], dtype=int32), 1)
In [10]:
# Evaluation function to test the face bounding box detector on a given file list
def eval_face_bb_detector(file_list, debug_mode=False):
    # Count number of detections with 1 face per image, 2 faces per image, 3 faces per image, ... in bins
    face_detections_per_image_bins = {}
    detected_images_with_faces = 0
    total_number_of_images = len(file_list)
    for file in file_list:
        faces, faces_per_image = face_bb_detector(file, debug_mode=debug_mode)
        if faces_per_image > 0:
            detected_images_with_faces += 1
        if faces_per_image in face_detections_per_image_bins:
            face_detections_per_image_bins[faces_per_image] += 1
        else:
            face_detections_per_image_bins[faces_per_image] = 1
    return detected_images_with_faces, total_number_of_images, face_detections_per_image_bins
In [74]:
# Evaluate and print the face bounding box detection results on human_files_short
human_images_with_face_detections, num_of_human_images, face_detections_per_human_image_bins = \
    eval_face_bb_detector(human_files_short, debug_mode=False) # set debug_mode=True to plot face detections
In [12]:
# Evaluate human face detection on images with human faces:
sorted_face_detections_per_human_image_bins = sorted(face_detections_per_human_image_bins.items(), 
                                                     key=operator.itemgetter(0))
# Show how many human faces have been detected in how many of the human images in human_files_short
print('Test human face detection with cv2 haar cascade classifier on human images:\n')
for key, value in sorted_face_detections_per_human_image_bins:
    print('Number of human images with %d human face bounding box detection(s) per image: %d of %d\n' % 
          (key, value, num_of_human_images))
# Total number of human images with face detections (no matter how many face detections per image)
print('Human images with face bounding box detections in total: %d of %d' % 
      (human_images_with_face_detections, num_of_human_images))
Test human face detection with cv2 haar cascade classifier on human images:

Number of human images with 0 human face bounding box detection(s) per image: 3 of 100

Number of human images with 1 human face bounding box detection(s) per image: 91 of 100

Number of human images with 2 human face bounding box detection(s) per image: 5 of 100

Number of human images with 3 human face bounding box detection(s) per image: 1 of 100

Human images with face bounding box detections in total: 97 of 100
In [75]:
# Evaluate and print the face bounding box detection results on dog images
dog_images_with_face_detections, num_of_dog_images, face_detections_per_dog_image_bins = \
    eval_face_bb_detector(dog_files_short, debug_mode=False) # set debug_mode=True to plot face detections
In [14]:
# Evaluate human face detection on images with dogs:
sorted_face_detections_per_dog_image_bins = sorted(face_detections_per_dog_image_bins.items(), 
                                                     key=operator.itemgetter(0))
# Show how many human faces have been detected in how many of the dog images in dog_files_short
print('Test human face detection with cv2 haar cascade classifier on dog images:\n')
for key, value in sorted_face_detections_per_dog_image_bins:
    print('Number of dog images with %d human face bounding box detection(s) per image: %d of %d\n' % 
          (key, value, num_of_dog_images))
# Show total number of dog images with face detections (no matter how many face detections per image)
print('Dog images with face bounding box detections in total: %d of %d' % 
      (dog_images_with_face_detections, num_of_dog_images))
Test human face detection with cv2 haar cascade classifier on dog images:

Number of dog images with 0 human face bounding box detection(s) per image: 91 of 100

Number of dog images with 1 human face bounding box detection(s) per image: 9 of 100

Dog images with face bounding box detections in total: 9 of 100

Evaluation of an alternative built-in face detector ensemble from openCV libarary

Alternative approach for human face detection using MTCNN classifier ensemble from openCV library

In [15]:
### (Optional) 
### TODO: Test performance of another face detection algorithm.
### Feel free to use as many code cells as needed.
from facenet_pytorch import MTCNN, extract_face # This requires pytorch 
import torch
import numpy as np
import cv2
from PIL import Image, ImageDraw
from IPython import display
In [16]:
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print('Running on device: {}'.format(device))
Running on device: cuda:0
In [17]:
def human_face_detector(img_path, debug_mode=True):
    
    # Set up face detector
    mtcnn_face_detector = MTCNN()
    
    # Convert image to RBG color format
    image = cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB)
        
    # Detect faces
    # Returns bounding boxes = None, confidence_values = [None] and landmarks = None if no faces are detected
    bounding_boxes, confidence_values, landmarks = mtcnn_face_detector.detect(image, landmarks=True)
    
    # Check if any faces have been detected
    if bounding_boxes is None:
        
        # Set number of faces per image to zero
        faces_per_image = 0
        
        # Return the bounding boxes = None, confidence_values = [None], landmarks = None, faces_per_image = 0
        return bounding_boxes, confidence_values, landmarks, faces_per_image
    else:
        
        # Get number of faces per image
        faces_per_image = len(bounding_boxes)
        
        # Add detected face bounding boxes with confidence values and keypoints to the image
        for (bounding_box, confidence, keypoints) in zip(bounding_boxes, confidence_values, landmarks):
                        
            # Get bounding box position
            upper_left_corner  = (bounding_box[0], bounding_box[1])
            lower_right_corner = (bounding_box[2], bounding_box[3])
            
            # Add face bounding box to the image
            cv2.rectangle(image,
                          upper_left_corner,
                          lower_right_corner,
                          (0,155,255),
                          2)
            
            # Text annotation format
            font                   = cv2.FONT_HERSHEY_SIMPLEX
            bottomLeftCornerOfText = (bounding_box[0], bounding_box[1])
            fontScale              = 1
            fontColor              = (255,255,255)
            lineType               = 2
            
            # Annotate face bounding box with the corresponding confidence value
            cv2.putText(image, str(confidence), 
                        bottomLeftCornerOfText, 
                        font, 
                        fontScale, 
                        fontColor, 
                        lineType)
            
            # Get face keypoints (left_eye, right_eye, nose, mouth_left, mouth_right) for each face bounding box  
            left_eye = tuple(keypoints[0])
            right_eye = tuple(keypoints[1])
            nose = tuple(keypoints[2])
            mouth_left = tuple(keypoints[3])
            mouth_right = tuple(keypoints[4])
            
            # Add keypoints (left_eye, right_eye, nose, mouth_left, mouth_right) to the image    
            cv2.circle(image, left_eye, 2, (0,155,255), 2)
            cv2.circle(image, right_eye, 2, (0,155,255), 2)
            cv2.circle(image, nose, 2, (0,155,255), 2)
            cv2.circle(image, mouth_left, 2, (0,155,255), 2)
            cv2.circle(image, mouth_right, 2, (0,155,255), 2)
    
        if debug_mode:
            
            # Print number of faces detected in the image
            print('Number of faces detected in image:', faces_per_image)
            
            # Display the image, along with bounding box
            plt.imshow(image)
            plt.show()
            
        # Return the bounding boxes, the probabilities and the landmarks of all detected human faces in the image
        return bounding_boxes, confidence_values, landmarks, faces_per_image
In [18]:
# Select index
index = 99

# Sample human_files data set
boxes, probs, _, _ = human_face_detector(human_files[index], debug_mode=True)

# Sample dog_files data set
boxes, probs, _, _ = human_face_detector(dog_files[index], debug_mode=True)
Number of faces detected in image: 1
Number of faces detected in image: 2
In [19]:
# Evaluation function to test the face bounding box detector on a given file list
def eval_human_face_detector(file_list, debug_mode=False):
    # Count number of detections with 1 face per image, 2 faces per image, 3 faces per image, ... in bins
    face_detections_per_image_bins = {}
    detected_images_with_faces = 0
    total_number_of_images = len(file_list)
    for file in file_list:
        _ , _ , _ , faces_per_image = human_face_detector(file, debug_mode=debug_mode)
        if faces_per_image > 0:
            detected_images_with_faces += 1
        if faces_per_image in face_detections_per_image_bins:
            face_detections_per_image_bins[faces_per_image] += 1
        else:
            face_detections_per_image_bins[faces_per_image] = 1
    return detected_images_with_faces, total_number_of_images, face_detections_per_image_bins
In [72]:
# Evaluate face bounding box detection using cv2's mtcnn classifier on human images
human_images_with_face_detections, num_of_human_images, face_detections_per_human_image_bins = \
    eval_human_face_detector(human_files_short, debug_mode=False) # set debug_mode=True to plot face detections
In [21]:
# Evaluate human face detection using cv2's mtcnn classifier on images with human faces:
sorted_face_detections_per_human_image_bins = sorted(face_detections_per_human_image_bins.items(), 
                                                     key=operator.itemgetter(0))
# Show how many human faces have been detected in how many of the human images in human_files_short
print('Test human face detection using cv2''s mtcnn classifier on human images:\n')
for key, value in sorted_face_detections_per_human_image_bins:
    print('Number of human images with %d human face bounding box detection(s) per image: %d of %d\n' % 
          (key, value, num_of_human_images))
# Total number of human images with face detections (no matter how many face detections per image)
print('Human images with face bounding box detections in total: %d of %d' % 
      (human_images_with_face_detections, num_of_human_images))
Test human face detection using cv2s mtcnn classifier on human images:

Number of human images with 1 human face bounding box detection(s) per image: 80 of 100

Number of human images with 2 human face bounding box detection(s) per image: 14 of 100

Number of human images with 3 human face bounding box detection(s) per image: 4 of 100

Number of human images with 4 human face bounding box detection(s) per image: 1 of 100

Number of human images with 6 human face bounding box detection(s) per image: 1 of 100

Human images with face bounding box detections in total: 100 of 100
In [73]:
# Evaluate face bounding box detection using cv2's mtcnn classifier on dog images
dog_images_with_face_detections, num_of_dog_images, face_detections_per_dog_image_bins = \
    eval_human_face_detector(dog_files_short, debug_mode=False) # set debug_mode=True to plot face detections
In [23]:
# Evaluate human face detection using cv2's mtcnn classifier on images with dogs:
sorted_face_detections_per_dog_image_bins = sorted(face_detections_per_dog_image_bins.items(), 
                                                     key=operator.itemgetter(0))
# Show how many human faces have been detected in how many of the dog images in dog_files_short
print('Test human face detection using cv2''s mtcnn classifier on dog images:\n')
for key, value in sorted_face_detections_per_dog_image_bins:
    print('Number of dog images with %d human face bounding box detection(s) per image: %d of %d\n' % 
          (key, value, num_of_dog_images))
# Show total number of dog images with face detections (no matter how many face detections per image)
print('Dog images with face bounding box detections in total: %d of %d' % 
      (dog_images_with_face_detections, num_of_dog_images))
Test human face detection using cv2s mtcnn classifier on dog images:

Number of dog images with 0 human face bounding box detection(s) per image: 64 of 100

Number of dog images with 1 human face bounding box detection(s) per image: 33 of 100

Number of dog images with 2 human face bounding box detection(s) per image: 3 of 100

Dog images with face bounding box detections in total: 36 of 100

Step 2: Detect Dogs

In this section, we use a pre-trained model to detect dogs in images.

Obtain Pre-trained VGG-16 Model

The code cell below downloads the VGG-16 model, along with weights that have been trained on ImageNet, a very large, very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of 1000 categories.

In [24]:
import torch
import torchvision.models as models

# define VGG16 model
VGG16 = models.vgg16(pretrained=True)
print(VGG16)

# check if CUDA is available
use_cuda = torch.cuda.is_available()

# move model to GPU if CUDA is available
if use_cuda:
    VGG16 = VGG16.cuda()
VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace=True)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace=True)
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU(inplace=True)
    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU(inplace=True)
    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace=True)
    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (13): ReLU(inplace=True)
    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU(inplace=True)
    (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (18): ReLU(inplace=True)
    (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (20): ReLU(inplace=True)
    (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (22): ReLU(inplace=True)
    (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (25): ReLU(inplace=True)
    (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (27): ReLU(inplace=True)
    (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (29): ReLU(inplace=True)
    (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
  (classifier): Sequential(
    (0): Linear(in_features=25088, out_features=4096, bias=True)
    (1): ReLU(inplace=True)
    (2): Dropout(p=0.5, inplace=False)
    (3): Linear(in_features=4096, out_features=4096, bias=True)
    (4): ReLU(inplace=True)
    (5): Dropout(p=0.5, inplace=False)
    (6): Linear(in_features=4096, out_features=1000, bias=True)
  )
)

Given an image, this pre-trained VGG-16 model returns a prediction (derived from the 1000 possible categories in ImageNet) for the object that is contained in the image.

(IMPLEMENTATION) Making Predictions with a Pre-trained Model

In the next code cell, you will write a function that accepts a path to an image (such as 'dogImages/train/001.Affenpinscher/Affenpinscher_00001.jpg') as input and returns the index corresponding to the ImageNet class that is predicted by the pre-trained VGG-16 model. The output should always be an integer between 0 and 999, inclusive.

Before writing the function, make sure that you take the time to learn how to appropriately pre-process tensors for pre-trained models in the PyTorch documentation.

In [25]:
from PIL import Image
import torchvision.transforms as transforms

# Set PIL to be tolerant of image files that are truncated.
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True

def VGG16_predict(img_path, debug_mode):
    '''
    Use pre-trained VGG-16 model to obtain index corresponding to 
    predicted ImageNet class for image at specified path
    
    Args:
        img_path: path to an image
        
    Returns:
        Index corresponding to VGG-16 model's prediction
    '''
    
    ## TODO: Complete the function.
    ## Load and pre-process an image from the given img_path
    ## Return the *index* of the predicted class for that image
    
    # Set up image transformations expected by the model
    predict_transforms = transforms.Compose([transforms.Resize(size=(336,336)),
                                             transforms.RandomRotation(30),
                                             transforms.CenterCrop(size=(224,224)),
                                             transforms.RandomResizedCrop(size=(224,224),
                                                                          scale=(0.8, 1.0),
                                                                          ratio=(0.75, 1.3333333333333333),
                                                                          interpolation=2),
                                             transforms.RandomHorizontalFlip(),
                                             transforms.ToTensor(),
                                             transforms.Normalize([0.485, 0.456, 0.406],
                                                                  [0.229, 0.224, 0.225])])
    
    # Open input image using PIL / pillow
    input_image = Image.open(img_path)
    if debug_mode:
        plt.imshow(input_image)
    
    # Transform input image to input tensor
    input_tensor = predict_transforms(input_image)
    
    # Reshape input tensor and move to cuda device
    if use_cuda:
        input_tensor = input_tensor.view(1, 3, 224, 224).cuda()
    else:
        input_tensor = input_tensor.view(1, 3, 224, 224)
    
    # Set VGG16 model to evaluation mode
    VGG16.eval()
    
    # Switch off gradients for forward prediction step
    with torch.no_grad():
        
        # Move input images to the default device
        #input_tensor = input_tensor.to(device)
        
        # Get log probabilities from the model output
        ps = VGG16.forward(input_tensor)
        
        # Get and print the top candicate
        topk, topclass = ps.topk(1, dim=1)
        
        # Move predicted topclass back to cpu
        if debug_mode:
            print(topk.cpu())
            print('Detected top class: ', topclass.cpu())
        else:
            topk.cpu()
            topclass.cpu()
        
    return topclass # predicted class index
In [26]:
# Test VGG16 prediction
print(VGG16_predict(dog_files[0], debug_mode=False))
tensor([[182]], device='cuda:0')

(IMPLEMENTATION) Write a Dog Detector

While looking at the dictionary, you will notice that the categories corresponding to dogs appear in an uninterrupted sequence and correspond to dictionary keys 151-268, inclusive, to include all categories from 'Chihuahua' to 'Mexican hairless'. Thus, in order to check to see if an image is predicted to contain a dog by the pre-trained VGG-16 model, we need only check if the pre-trained model predicts an index between 151 and 268 (inclusive).

Use these ideas to complete the dog_detector function below, which returns True if a dog is detected in an image (and False if not).

In [70]:
# Import imagenet class label index dictionary from file using inbuilt eval() function
with open('imagenet1000_clsidx_to_labels.txt','r') as inf:
    imagenet_idx_to_classlabel = eval(inf.read())
# print(imagenet_idx_to_classlabel)
In [71]:
# Extract sub-dictionary with dog class labels only
idx_to_dogclasslabel = {}
for k in range(151, 269):
    if k in imagenet_idx_to_classlabel:
        idx_to_dogclasslabel[k] = imagenet_idx_to_classlabel[k]
# print(idx_to_dogclasslabel)
In [29]:
### returns "True" if a dog is detected in the image stored at img_path
def VGG16_dog_detector(img_path, debug_mode=False):
    ## TODO: Complete the function.
    
    # Dog detector based on pretrained (unmodified) VGG16
        
    # Use VGG16 to predict an object class index
    obj_class_idx = VGG16_predict(img_path, debug_mode=debug_mode)
    
    # Move obj_class_idx tensor to cpu and convert to integer using numpy()
    class_index = int(obj_class_idx.cpu().numpy().squeeze())
    
    # Check if it's a dog or not: If ob_class_index is within the range 151 and 268 (inclusive) then it is a dog
    prediction = class_index in idx_to_dogclasslabel
    
    # Get true image net class label
    class_label = imagenet_idx_to_classlabel[class_index]
    
    # Return results on cpu
    return prediction, class_index, class_label
In [30]:
# Select index
index = 9

# Sample human_files data set
img = human_files[index]

# Run dog detector
prediction, class_index, class_label = VGG16_dog_detector(img, debug_mode=True)

# Print prediction resultsa
print(prediction)
print(class_index)
print(class_label)
tensor([[7.1532]])
Detected top class:  tensor([[906]])
False
906
Windsor tie
In [31]:
# Select index
index = 9

# Sample dog_files data set
img = dog_files[index]

# Run dog detector
prediction, class_index, class_label = VGG16_dog_detector(img, debug_mode=True)

# Print prediction results
print(prediction)
print(class_index)
print(class_label)
tensor([[21.1373]])
Detected top class:  tensor([[182]])
True
182
Border terrier

(IMPLEMENTATION) Assess the Dog Detector

Question 2: Use the code cell below to test the performance of your dog_detector function.

  • What percentage of the images in human_files_short have a detected dog?
  • What percentage of the images in dog_files_short have a detected dog?

Answer:

In [32]:
### TODO: Test the performance of the dog_detector function
### on the images in human_files_short and dog_files_short.

# Calculate the percentage of images in human_files_short detected as a dog (false positive)
dog_detections_in_human_files_short = np.zeros(human_files_short.shape)
for idx, file in enumerate(human_files_short):
    prediction, class_index, class_label = VGG16_dog_detector(file)
    if prediction:
        dog_detections_in_human_files_short[idx] = 1
print('Percentage of the images in human_files_short classified as a dog (False Positives): ', \
      np.sum(dog_detections_in_human_files_short)/len(human_files_short)*100, '%')

# Calculate the percentage of images in dog_files_short with a detected human face
dog_detections_in_dog_files_short = np.zeros(dog_files_short.shape)
for idx, file in enumerate(dog_files_short):
    prediction, class_index, class_label = VGG16_dog_detector(file)
    if prediction:
        dog_detections_in_dog_files_short[idx] = 1
print('Percentage of images in dog_files_short classified as a dog (True Positives): ', \
      np.sum(dog_detections_in_dog_files_short)/len(dog_files_short)*100, '%')
Percentage of the images in human_files_short classified as a dog (False Positives):  6.0 %
Percentage of images in dog_files_short classified as a dog (True Positives):  100.0 %

We suggest VGG-16 as a potential network to detect dog images in your algorithm, but you are free to explore other pre-trained networks (such as Inception-v3, ResNet-50, etc). Please use the code cell below to test other pre-trained PyTorch models. If you decide to pursue this optional task, report performance on human_files_short and dog_files_short.

In [33]:
### (Optional) 
### TODO: Report the performance of another pre-trained network.
### Feel free to use as many code cells as needed.
import torch
import torchvision.models as models

# define ResNet50 model using pre-trained weights
ResNet50 = models.resnet50(pretrained=True)
print(ResNet50)

# define Inception-V3 model using pretrained weights
#InceptionV3 = models.inception_v3(pretrained=True)
#print(InceptionV3)

# define GoogLeNet model using pretrained weights
#GoogLeNet = models.googlenet(pretrained=True)
#print(GoogLeNet)

# check if CUDA is available
use_cuda = torch.cuda.is_available()

# move model to GPU if CUDA is available
if use_cuda:
    ResNet50 = ResNet50.cuda()
    print('ResNet50 will run with cuda support.')
ResNet(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (layer2): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (3): Bottleneck(
      (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (layer3): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (3): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (4): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (5): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (layer4): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (fc): Linear(in_features=2048, out_features=1000, bias=True)
)
ResNet50 will run with cuda support.
In [34]:
# from PIL import Image
# import torchvision.transforms as transforms
# from PIL import ImageFile
# ImageFile.LOAD_TRUNCATED_IMAGES = True     # Set PIL to be tolerant of image files that are truncated

def ResNet50_predict(img_path, debug_mode=False):
    '''
    Use pre-trained ResNet50 model to obtain index corresponding to 
    predicted ImageNet class for image at specified path
    
    Args:
        img_path: path to an image
        
    Returns:
        Index corresponding to ResNet50 model's prediction
    '''
    
    ## TODO: Complete the function.
    ## Load and pre-process an image from the given img_path
    ## Return the *index* of the predicted class for that image
        
    # Set up image transformations expected by the model
    predict_transforms = transforms.Compose([transforms.Resize(size=256),
                                       transforms.RandomResizedCrop(224),
                                       transforms.ToTensor(),
                                       transforms.Normalize([0.485, 0.456, 0.406], 
                                                            [0.229, 0.224, 0.225])])
    
    # Open input image using PIL / pillow
    input_image = Image.open(img_path)
    if debug_mode:
        plt.imshow(input_image)
    
    # Transform input image to input tensor
    input_tensor = predict_transforms(input_image)
    
    # Reshape input tensor and move to cuda device
    if use_cuda:
        input_tensor = input_tensor.view(1, 3, 224, 224).cuda()
    else:
        input_tensor = input_tensor.view(1, 3, 224, 224)
    
    # Set ResNet50 model to evaluation mode
    ResNet50.eval()
    
    # Switch off gradients for forward prediction step
    with torch.no_grad():
        
        # Move input images to the default device
        #input_tensor = input_tensor.to(device)
        
        # Get log probabilities from the model output
        ps = ResNet50.forward(input_tensor)
        
        # Get and print the top candicate
        topk, topclass = ps.topk(1, dim=1)
        
        # Move predicted topclass back to cpu
        if debug_mode:
            print(topk.cpu())
            print('Detected top class: ', topclass.cpu())
        else:
            topclass.cpu()
        
    return topclass # predicted class index
In [35]:
### returns "True" if a dog is detected in the image stored at img_path
def ResNet50_dog_detector(img_path, debug_mode=False):
    ## TODO: Complete the function.
        
    # Use ResNet50 to predict an object class index
    obj_class_idx = ResNet50_predict(img_path, debug_mode=debug_mode)
    
    # Move obj_class_idx tensor to cpu and convert to integer using numpy()
    class_index = int(obj_class_idx.cpu().numpy().squeeze())
    
    # Check if it's a dog or not: If ob_class_index is within the range 151 and 268 (inclusive) then it is a dog
    prediction = class_index in idx_to_dogclasslabel
    
    # Get true image net class label
    class_label = imagenet_idx_to_classlabel[class_index]
    
    # Return results on cpu
    return prediction, class_index, class_label
In [36]:
# Select index
index = 27

# Sample human_files data set
img = human_files[index]

# Run dog detector
prediction, class_index, class_label = ResNet50_dog_detector(img, debug_mode=True)

# Print prediction results
print(prediction)
print(class_index)
print(class_label)
tensor([[5.6329]])
Detected top class:  tensor([[678]])
False
678
neck brace
In [37]:
# Select index
index = 27

# Sample dog_files data set
img = dog_files[index]

# Run dog detector
prediction, class_index, class_label = ResNet50_dog_detector(img, debug_mode=True)

# Print prediction results
print(prediction)
print(class_index)
print(class_label)
tensor([[19.6409]])
Detected top class:  tensor([[182]])
True
182
Border terrier
In [38]:
### TODO: Test the performance of the dog_detector function
### on the images in human_files_short and dog_files_short.

# Calculate the percentage of images in human_files_short detected as a dog (false positive)
dog_detections_in_human_files_short = np.zeros(human_files_short.shape)
for idx, file in enumerate(human_files_short):
    prediction, class_index, class_label = ResNet50_dog_detector(file)
    if prediction:
        dog_detections_in_human_files_short[idx] = 1
print('Percentage of the images in human_files_short classified as a dog (False Positives): ', \
      np.sum(dog_detections_in_human_files_short)/len(human_files_short)*100, '%')

# Calculate the percentage of images in dog_files_short with a detected human face
dog_detections_in_dog_files_short = np.zeros(dog_files_short.shape)
for idx, file in enumerate(dog_files_short):
    prediction, class_index, class_label = ResNet50_dog_detector(file)
    if prediction:
        dog_detections_in_dog_files_short[idx] = 1
print('Percentage of images in dog_files_short classified as a dog (True Positives): ', \
      np.sum(dog_detections_in_dog_files_short)/len(dog_files_short)*100, '%')
Percentage of the images in human_files_short classified as a dog (False Positives):  1.0 %
Percentage of images in dog_files_short classified as a dog (True Positives):  100.0 %

Step 3: Create a CNN to Classify Dog Breeds (from Scratch)

Now that we have functions for detecting humans and dogs in images, we need a way to predict breed from images. In this step, you will create a CNN that classifies dog breeds. You must create your CNN from scratch (so, you can't use transfer learning yet!), and you must attain a test accuracy of at least 10%. In Step 4 of this notebook, you will have the opportunity to use transfer learning to create a CNN that attains greatly improved accuracy.

We mention that the task of assigning breed to dogs from images is considered exceptionally challenging. To see why, consider that even a human would have trouble distinguishing between a Brittany and a Welsh Springer Spaniel.

Brittany Welsh Springer Spaniel

It is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels).

Curly-Coated Retriever American Water Spaniel

Likewise, recall that labradors come in yellow, chocolate, and black. Your vision-based algorithm will have to conquer this high intra-class variation to determine how to classify all of these different shades as the same breed.

Yellow Labrador Chocolate Labrador Black Labrador

We also mention that random chance presents an exceptionally low bar: setting aside the fact that the classes are slightly imabalanced, a random guess will provide a correct answer roughly 1 in 133 times, which corresponds to an accuracy of less than 1%.

Remember that the practice is far ahead of the theory in deep learning. Experiment with many different architectures, and trust your intuition. And, of course, have fun!

(IMPLEMENTATION) Specify Data Loaders for the Dog Dataset

Use the code cell below to write three separate data loaders for the training, validation, and test datasets of dog images (located at dogImages/train, dogImages/valid, and dogImages/test, respectively). You may find this documentation on custom datasets to be a useful resource. If you are interested in augmenting your training and/or validation data, check out the wide variety of transforms!

In [39]:
import os
import torch
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
import numpy as np

#import cv2                
import matplotlib.pyplot as plt                        
%matplotlib inline                               

# Check if CUDA is available
use_cuda = torch.cuda.is_available()
In [40]:
### TODO: Write data loaders for training, validation, and test sets
## Specify appropriate transforms, and batch_sizes

# number of subprocesses to use for data loading
num_workers = 2
# how many samples per batch to load
batch_size = 32
# percentage of training set to use as validation
valid_size = 0.2

# Convert data to torch.FloatTensor
image_transforms = transforms.Compose([transforms.Resize(size=(336,336)),
                                       transforms.RandomRotation(30),                                       
                                       transforms.CenterCrop(size=(224,224)),
                                       transforms.RandomResizedCrop(size=(224,224), 
                                                                    scale=(0.8, 1.0), 
                                                                    ratio=(0.75, 1.3333333333333333), 
                                                                    interpolation=2),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor(),
                                       transforms.Normalize([0.485, 0.456, 0.406], 
                                                            [0.229, 0.224, 0.225])])

# Directory where training, validation and testing data are stored
data_dir = "dogImages"

# Pass transforms in here, then run the next cell to see how the transforms look
image_datasets = {x: datasets.ImageFolder(root=os.path.join(data_dir, x), transform=image_transforms) 
                 for x in ['train', 'test', 'valid']}

# Obtain training indices that will be used for validation
num_train = len(image_datasets['train'])
train_idx = list(range(num_train))
num_valid = len(image_datasets['valid'])
valid_idx = list(range(num_valid))
np.random.shuffle(train_idx)
np.random.shuffle(valid_idx)

# Define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)

# Prepare the data loaders
train_loader = torch.utils.data.DataLoader(image_datasets['train'], 
                                           batch_size=batch_size,
                                           sampler=train_sampler, 
                                           num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(image_datasets['valid'], 
                                           batch_size=batch_size,
                                           sampler=valid_sampler,
                                           num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(image_datasets['test'], 
                                          batch_size=batch_size,
                                          num_workers=num_workers)
loaders_scratch = {'train': train_loader, 'valid': valid_loader, 'test': test_loader}
In [41]:
# Get class names and number of classes in training, validation and test dataset
class_names = {x: image_datasets[x].classes for x in ['train', 'valid', 'test']}
number_of_classes = {x: len(class_names[x]) for x in ['train', 'valid', 'test']}

print("Number of classes in training dataset:  ", number_of_classes['train'])
print("Number of classes in validation dataset:", number_of_classes['valid'])
print("Number of classes in test dataset:      ", number_of_classes['test'])
if all([number_of_classes['valid'] == number_of_classes['train'], number_of_classes['test'] == number_of_classes['train']]):
    print('\nAll class labels are contained in training, validation and testing dataset => Check i. O.')
elif number_of_classes['valid'] != number_of_classes['train']:
    print('\nThe number of classes in validation and training dataset are not the same! => Check n. i. O.')
else:
    print('\nThe number of classes in test and training dataset are not the same! => Check n. i. O.')
Number of classes in training dataset:   133
Number of classes in validation dataset: 133
Number of classes in test dataset:       133

All class labels are contained in training, validation and testing dataset => Check i. O.
In [76]:
# Get dictionary with gog breeds and indices from 1 to number_of_classes
dog_breeds = {}
for key, value in zip(range(1, number_of_classes['train']+1), image_datasets['train'].classes):
    dog_breeds[key] = value
# print(dog_breeds)
In [43]:
# Visualize one batch of images in the training dataset
images, classes = next(iter(loaders_scratch['train']))

print('\nOne sample batch of dog images from the training data set:')
print('\nImage format = ', images[0].shape)

for image, class_label in zip(images, classes):
    # Detach he tensor from the current graph, maka a copy of this tensor and move it gpu to cpu
    image = image.to('cpu').clone().detach()
    # Transform tensor to numpy array and squeeze singular dimensions
    image = image.numpy().squeeze()
    class_label = class_label.numpy().squeeze()
    # Transpose numpy array => shift color axis to the back
    image = image.transpose(1,2,0)
    # Invert image normalization
    image = image * np.array((0.229, 0.224, 0.225)) + np.array((0.485, 0.456, 0.406))
    # Clip input data to valid range for imshow with RGB data
    image = image.clip(0, 1)
    
    # Show image
    fig = plt.figure(figsize=(12,3))
    plt.imshow(image)
    plt.title(class_names['train'][class_label])
One sample batch of dog images from the training data set:

Image format =  torch.Size([3, 224, 224])
/home/andreas/.local/lib/python3.6/site-packages/matplotlib/pyplot.py:514: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
  max_open_warning, RuntimeWarning)

Question 3: Describe your chosen procedure for preprocessing the data.

  • How does your code resize the images (by cropping, stretching, etc)? What size did you pick for the input tensor, and why?
  • Did you decide to augment the dataset? If so, how (through translations, flips, rotations, etc)? If not, why not?

Answers to Question 3: All pre-trained torchvision models (expect for the input images) are normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224. The images have to be loaded in to a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]. This hand-crafted CNN trained from the scratch should thus also be tailored to the same image format. So I have picket the same 224 x 224 x 3 input tensor size as pretrained CNN models included in torch models (VGG16, ResNet50, ...) to have the same format and thus a comparision.

The following image transformations (resizing, cropping and normalization plus data augmentation by rotation and horizontal flipping) have been applied in this order:

1. Resizing to 336x336 pixel to make sure to get frames without black rim when rotating the original image by up to +/-30°
2. Random rotation of the resized image by up to +/-30°
3. Random resize by factor 0.8...1.0 and by ratio 3/4...4/3
4. Crop the center part of the image with w x h = 224 x 224 rgb format needed by the detector
5. Random flip the image horizontally
6. Transform image array to torch tensor
7. Normalize color image over all three color channels to improve cnn training process using the same normalization parameters as for the pre-trained models in torchvision.models

Remark: I have also tried color jitter but that did not lead to an immediate success. So for now I stuck with the above mentioned transformations as in the course examples.

(IMPLEMENTATION) Model Architecture

Create a CNN to classify dog breed. Use the template in the code cell below.

In [44]:
import torch.nn as nn
import torch.nn.functional as F

# define the CNN architecture
class Net(nn.Module):
    ### TODO: choose an architecture, and complete the class
    def __init__(self):
        super(Net, self).__init__()
        ## Define layers of a CNN
        
        # Feature extractor
        
        # First convolutional layer with 3 x 3 x 3 filter kernel (sees a 3 x 224 x 224 tensor)
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=1, padding=1)
        # Second convolutional layer with 16 x 3 x 3 filter kernel (sees a 16 x 112 x 112 tensor)
        self.conv2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3, stride=1, padding=1)
        # Thrid convolutional layer with 32 x 3 x 3 filter kernel (sees 32 x 56 x 56 tensor)
        self.conv3 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, stride=1, padding=1)

        # Max pooling layer
        self.pool = nn.MaxPool2d(2, 2)
        
        # Classifier
        
        # Fully-connected linear layer 1 with 512 nodes (sees a 64 * 28 * 28 tensor)
        self.fc1 = nn.Linear(64 * 28 * 28, 399)
        # Fully-connected linear layer 2 with 133 nodes (sees a 399 x 1 tensor)
        self.fc2 = nn.Linear(399, 133)
        
        # Regularization layers
        
        # Dropout layer (p=0.25)
        self.dropout = nn.Dropout(0.25)
        
        # Batch normalization layer for the last linear layer
        self.batch_norm = nn.BatchNorm1d(num_features=399)
    
    def forward(self, x):
        ## Define forward behavior
        
        # Feature extractor
        
        # Add convolutional layer 1 with relu activation function and max pooling
        x = self.pool(F.relu(self.conv1(x)))
        
        # Add dropout layer
        x = self.dropout(x)
        
        # Add convolutional layer 2 with relu activation function and max pooling
        x = self.pool(F.relu(self.conv2(x)))
        
        # Add dropout layer
        x = self.dropout(x)
        
        # Add convolutional layer 3 with relu activation function and max pooling
        x = self.pool(F.relu(self.conv3(x)))

        # Add dropout layer
        x = self.dropout(x)
        
        # Classifier
        
        # Flatten 64 x 28 x 28 input tensor to first fully conntected layer
        x = x.view(x.size(0), -1)
        
        # Add fully connected hidden layer 1 with batch normalization and relu activation function
        x = F.relu(self.batch_norm(self.fc1(x)))
        
        # Add dropout layer
        x = self.dropout(x)
        
        # Add fully connected hidden layer 2 without any activation function => return class scores
        # A softmax function will be added by the criterion
        x = self.fc2(x)
        
        # Return output tensor
        return x
    
#-#-# You do NOT have to modify the code below this line. #-#-#

# instantiate the CNN
model_scratch = Net()

# move tensors to GPU if CUDA is available
if use_cuda:
    model_scratch.cuda()

Question 4: Outline the steps you took to get to your final CNN architecture and your reasoning at each step.

Answers to Question 4:

  • The input tensor shape (3, 224, 224) is given by the image size, which is used by the other pre-trained CNNs we are comparing this hand-drafted CNN with.
  • The number of output classes resp. the output tensor shape (1, 133) is given by the number of dog breeds to be classified.
  • The purpose of the first part of the stack is to detect features (characteristic points, edges, corners or models) of the objects to be detected. This is done by convolutional layers with max pooling to reduce the dimensions.
  • The lowest part of the CNN consists of a stack of three convolutional layers with small 3x3 convolutional kernels with stride = 1 and padding = 1 what leads to an equally sized filter output per channel. These filter outputs are then passed through a rectified linear unit activation function, and then pass a max pooling layer with a 2x2 kernels each, what divides the dimensions of the max pool output layers by two after each layer. At the same time the depth of the convolutional layer stack is doubled.
  • Conv layer 1: input tensor shape = (3, 224, 224), output tensor shape = (16, 112, 112)
  • Conv lLayer 2: input tensor shape = (16, 112, 112), output tensor shape = (32, 56, 56)
  • Conv layer 3: input tensor shape = (32, 56, 56), output tensor shape = (64, 28, 28)
  • The number of convolutional layers determines the complexity of features the CNN is able to learn. The potential complexity increases with the number of convolutional layers. However, I am limited with my GPU resources on my local machine. So I stuck with three convlutional layers (and only 2 fully connected layers).
  • After the convolutional stack a classifier with fully connected layers is added to transform the information contained in the output feature maps into class probabilities. The feature maps of conv layer 3 are therefore flattened to a (1, 64 28 28) tensor
  • I have added one 1D batch normalization layer to enhance the classification by a more balanced distribution of values in the first fully connected layer
  • I have used 2 fully connected layers because that worked with the mnist example, too.
  • Fully connected layer 1: input tensor shape = (1, 64 28 28) = (1, 50176)
  • Fully connected layer 2: input tensor shape = (1, 399)
  • Final output tensor shape = (1, 133) => 133 dog breed classes
  • As a regularization means I have added an additional dropout layer with 25% dropout probability after each layer (except for the final layer)
  • The input tensor is normalized anyway be image transoformation in training, validation, test and prediction
  • The softmax function is not added to the output layer because it is already included in the optimization criterion (CrossEntropyLoss)

(IMPLEMENTATION) Specify Loss Function and Optimizer

Use the next code cell to specify a loss function and optimizer. Save the chosen loss function as criterion_scratch, and the optimizer as optimizer_scratch below.

In [45]:
import torch.optim as optim

### TODO: select loss function
criterion_scratch = nn.CrossEntropyLoss()

### TODO: select optimizer
# optimizer_scratch = optim.SGD(model_scratch.parameters(), lr=0.01)
optimizer_scratch = optim.SGD(model_scratch.parameters(), lr=0.005)
# optimizer_scratch = optim.SGD(model_scratch.parameters(), lr=0.01, momentum=0.9)

(IMPLEMENTATION) Train and Validate the Model

Train and validate your model in the code cell below. Save the final model parameters at filepath 'model_scratch.pt'.

In [46]:
# the following import is required for training to be robust to truncated images
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True

def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path):
    """returns trained model"""
    # initialize tracker for minimum validation loss
    valid_loss_min = np.Inf
    
    # initialize variables to monitor training and validation loss
    train_loss = 0
    valid_loss = 0
    train_loss_progress = np.zeros(n_epochs)
    valid_loss_progress = np.zeros(n_epochs)
    
    for epoch in range(1, n_epochs+1):
                        
        ###################
        # train the model #
        ###################
        model.train()
        for batch_idx, (data, target) in enumerate(loaders['train']):
            # move to GPU
            if use_cuda:
                data, target = data.cuda(), target.cuda()
            ## find the loss and update the model parameters accordingly
            ## record the average training loss, using something like
            ## train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))
            
            # Switch off gradients
            optimizer.zero_grad()
            # Forward pass: Compute predicted outputs by passing an input tensor to the model
            output = model(data)
            # Calculate the current batch loss
            loss = criterion(output, target)
            # Backward pass: Compute gradient of the batch loss with respect to model parameters
            loss.backward()
            # Perform a single optimization step and update the model parameters
            optimizer.step()
            # Update running training loss by the current batch loss
            train_loss += loss.item()*data.size(0)
            # train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.item() - train_loss))
            train_loss_progress[epoch-1] = train_loss
            
        ######################    
        # validate the model #
        ######################
        model.eval()
        for batch_idx, (data, target) in enumerate(loaders['valid']):
            # move to GPU
            if use_cuda:
                data, target = data.cuda(), target.cuda()
            ## update the average validation loss
            
            # Forward pass: Compute predicted outputs by passing an input tensor to the model
            output = model(data)
            # Calculate the current batch loss
            loss = criterion(output, target)
            # Update running validation loss by the current batch loss
            valid_loss += loss.item()*data.size(0)
            # valid_loss = valid_loss + ((1 / (batch_idx + 1)) * (loss.item() - valid_loss))
            valid_loss_progress[epoch-1] = valid_loss
        
        # calculate average loss over an epoch
        train_loss = train_loss/len(loaders['train'].sampler)
        valid_loss = valid_loss/len(loaders['valid'].sampler)
        train_loss_progress[epoch-1] = train_loss
        valid_loss_progress[epoch-1] = valid_loss
            
        # print training/validation statistics 
        print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
            epoch, 
            train_loss,
            valid_loss
            ))
        
        ## TODO: save the model if validation loss has decreased
        if valid_loss <= valid_loss_min:
            print('Validation loss decreased ({:.6f} --> {:.6f}).  Saving model ...'.format(
                valid_loss_min,
                valid_loss))
            torch.save(model.state_dict(), save_path)
            valid_loss_min = valid_loss
        
    # Return trained model
    return model, train_loss_progress, valid_loss_progress
In [47]:
# train the model
# Remark: After 100 iterations and a learning rate of 0.01 the model starts overfitting
model_scratch, train_loss_progress_scratch, valid_loss_progress_scratch = train(100, loaders_scratch,
                                                                                model_scratch,
                                                                                optimizer_scratch,
                                                                                criterion_scratch,
                                                                                use_cuda,
                                                                                'model_scratch.pt')

# load the model that got the best validation accuracy
model_scratch.load_state_dict(torch.load('model_scratch.pt'))
Epoch: 1 	Training Loss: 4.774740 	Validation Loss: 4.910595
Validation loss decreased (inf --> 4.910595).  Saving model ...
Epoch: 2 	Training Loss: 4.577092 	Validation Loss: 4.899822
Validation loss decreased (4.910595 --> 4.899822).  Saving model ...
Epoch: 3 	Training Loss: 4.479470 	Validation Loss: 4.921536
Epoch: 4 	Training Loss: 4.390660 	Validation Loss: 4.897529
Validation loss decreased (4.899822 --> 4.897529).  Saving model ...
Epoch: 5 	Training Loss: 4.329453 	Validation Loss: 4.928772
Epoch: 6 	Training Loss: 4.270938 	Validation Loss: 4.925026
Epoch: 7 	Training Loss: 4.220386 	Validation Loss: 4.993670
Epoch: 8 	Training Loss: 4.183966 	Validation Loss: 5.079481
Epoch: 9 	Training Loss: 4.139488 	Validation Loss: 4.994446
Epoch: 10 	Training Loss: 4.097811 	Validation Loss: 5.143667
Epoch: 11 	Training Loss: 4.058086 	Validation Loss: 5.149997
Epoch: 12 	Training Loss: 4.034110 	Validation Loss: 5.137150
Epoch: 13 	Training Loss: 3.983187 	Validation Loss: 5.136340
Epoch: 14 	Training Loss: 3.959490 	Validation Loss: 5.123335
Epoch: 15 	Training Loss: 3.933079 	Validation Loss: 5.049232
Epoch: 16 	Training Loss: 3.894490 	Validation Loss: 5.168196
Epoch: 17 	Training Loss: 3.851457 	Validation Loss: 5.085882
Epoch: 18 	Training Loss: 3.828754 	Validation Loss: 4.878054
Validation loss decreased (4.897529 --> 4.878054).  Saving model ...
Epoch: 19 	Training Loss: 3.791428 	Validation Loss: 5.008540
Epoch: 20 	Training Loss: 3.764619 	Validation Loss: 4.877504
Validation loss decreased (4.878054 --> 4.877504).  Saving model ...
Epoch: 21 	Training Loss: 3.726508 	Validation Loss: 4.916160
Epoch: 22 	Training Loss: 3.701613 	Validation Loss: 4.826446
Validation loss decreased (4.877504 --> 4.826446).  Saving model ...
Epoch: 23 	Training Loss: 3.674044 	Validation Loss: 4.761842
Validation loss decreased (4.826446 --> 4.761842).  Saving model ...
Epoch: 24 	Training Loss: 3.638596 	Validation Loss: 4.816720
Epoch: 25 	Training Loss: 3.613296 	Validation Loss: 4.649992
Validation loss decreased (4.761842 --> 4.649992).  Saving model ...
Epoch: 26 	Training Loss: 3.576899 	Validation Loss: 4.658418
Epoch: 27 	Training Loss: 3.544381 	Validation Loss: 4.681472
Epoch: 28 	Training Loss: 3.508053 	Validation Loss: 4.552165
Validation loss decreased (4.649992 --> 4.552165).  Saving model ...
Epoch: 29 	Training Loss: 3.472073 	Validation Loss: 4.568822
Epoch: 30 	Training Loss: 3.452411 	Validation Loss: 4.556215
Epoch: 31 	Training Loss: 3.410250 	Validation Loss: 4.506886
Validation loss decreased (4.552165 --> 4.506886).  Saving model ...
Epoch: 32 	Training Loss: 3.393646 	Validation Loss: 4.513109
Epoch: 33 	Training Loss: 3.356873 	Validation Loss: 4.342378
Validation loss decreased (4.506886 --> 4.342378).  Saving model ...
Epoch: 34 	Training Loss: 3.331049 	Validation Loss: 4.457072
Epoch: 35 	Training Loss: 3.293418 	Validation Loss: 4.437878
Epoch: 36 	Training Loss: 3.253995 	Validation Loss: 4.464668
Epoch: 37 	Training Loss: 3.220226 	Validation Loss: 4.401335
Epoch: 38 	Training Loss: 3.183215 	Validation Loss: 4.332944
Validation loss decreased (4.342378 --> 4.332944).  Saving model ...
Epoch: 39 	Training Loss: 3.167729 	Validation Loss: 4.144546
Validation loss decreased (4.332944 --> 4.144546).  Saving model ...
Epoch: 40 	Training Loss: 3.124243 	Validation Loss: 4.240450
Epoch: 41 	Training Loss: 3.102269 	Validation Loss: 4.179385
Epoch: 42 	Training Loss: 3.058445 	Validation Loss: 4.197492
Epoch: 43 	Training Loss: 3.036938 	Validation Loss: 4.386422
Epoch: 44 	Training Loss: 3.006616 	Validation Loss: 4.100856
Validation loss decreased (4.144546 --> 4.100856).  Saving model ...
Epoch: 45 	Training Loss: 2.967368 	Validation Loss: 4.148072
Epoch: 46 	Training Loss: 2.944734 	Validation Loss: 4.135542
Epoch: 47 	Training Loss: 2.906719 	Validation Loss: 4.042551
Validation loss decreased (4.100856 --> 4.042551).  Saving model ...
Epoch: 48 	Training Loss: 2.859206 	Validation Loss: 4.000137
Validation loss decreased (4.042551 --> 4.000137).  Saving model ...
Epoch: 49 	Training Loss: 2.850739 	Validation Loss: 4.080852
Epoch: 50 	Training Loss: 2.815630 	Validation Loss: 4.169876
Epoch: 51 	Training Loss: 2.772151 	Validation Loss: 4.070411
Epoch: 52 	Training Loss: 2.748722 	Validation Loss: 3.934216
Validation loss decreased (4.000137 --> 3.934216).  Saving model ...
Epoch: 53 	Training Loss: 2.702873 	Validation Loss: 4.044623
Epoch: 54 	Training Loss: 2.689936 	Validation Loss: 3.890700
Validation loss decreased (3.934216 --> 3.890700).  Saving model ...
Epoch: 55 	Training Loss: 2.645098 	Validation Loss: 3.911111
Epoch: 56 	Training Loss: 2.627000 	Validation Loss: 3.994045
Epoch: 57 	Training Loss: 2.589336 	Validation Loss: 3.958560
Epoch: 58 	Training Loss: 2.568667 	Validation Loss: 3.998787
Epoch: 59 	Training Loss: 2.514870 	Validation Loss: 3.988306
Epoch: 60 	Training Loss: 2.516490 	Validation Loss: 3.977315
Epoch: 61 	Training Loss: 2.470181 	Validation Loss: 3.861565
Validation loss decreased (3.890700 --> 3.861565).  Saving model ...
Epoch: 62 	Training Loss: 2.437332 	Validation Loss: 3.897702
Epoch: 63 	Training Loss: 2.404914 	Validation Loss: 3.852013
Validation loss decreased (3.861565 --> 3.852013).  Saving model ...
Epoch: 64 	Training Loss: 2.394510 	Validation Loss: 3.915430
Epoch: 65 	Training Loss: 2.353695 	Validation Loss: 3.979491
Epoch: 66 	Training Loss: 2.330840 	Validation Loss: 3.829216
Validation loss decreased (3.852013 --> 3.829216).  Saving model ...
Epoch: 67 	Training Loss: 2.330756 	Validation Loss: 3.965885
Epoch: 68 	Training Loss: 2.277364 	Validation Loss: 3.904755
Epoch: 69 	Training Loss: 2.256584 	Validation Loss: 3.961562
Epoch: 70 	Training Loss: 2.227415 	Validation Loss: 3.862498
Epoch: 71 	Training Loss: 2.209466 	Validation Loss: 3.819016
Validation loss decreased (3.829216 --> 3.819016).  Saving model ...
Epoch: 72 	Training Loss: 2.161908 	Validation Loss: 3.816728
Validation loss decreased (3.819016 --> 3.816728).  Saving model ...
Epoch: 73 	Training Loss: 2.125590 	Validation Loss: 4.017470
Epoch: 74 	Training Loss: 2.123881 	Validation Loss: 3.842767
Epoch: 75 	Training Loss: 2.081145 	Validation Loss: 3.859449
Epoch: 76 	Training Loss: 2.055915 	Validation Loss: 3.893897
Epoch: 77 	Training Loss: 2.033075 	Validation Loss: 3.880928
Epoch: 78 	Training Loss: 2.015329 	Validation Loss: 4.016550
Epoch: 79 	Training Loss: 1.995290 	Validation Loss: 3.943652
Epoch: 80 	Training Loss: 1.950599 	Validation Loss: 3.783715
Validation loss decreased (3.816728 --> 3.783715).  Saving model ...
Epoch: 81 	Training Loss: 1.942618 	Validation Loss: 3.900359
Epoch: 82 	Training Loss: 1.905407 	Validation Loss: 3.919936
Epoch: 83 	Training Loss: 1.882051 	Validation Loss: 3.870716
Epoch: 84 	Training Loss: 1.877757 	Validation Loss: 3.839604
Epoch: 85 	Training Loss: 1.826822 	Validation Loss: 3.905149
Epoch: 86 	Training Loss: 1.817667 	Validation Loss: 3.894893
Epoch: 87 	Training Loss: 1.818965 	Validation Loss: 3.875054
Epoch: 88 	Training Loss: 1.756262 	Validation Loss: 3.977359
Epoch: 89 	Training Loss: 1.755712 	Validation Loss: 3.978607
Epoch: 90 	Training Loss: 1.724588 	Validation Loss: 3.903069
Epoch: 91 	Training Loss: 1.679623 	Validation Loss: 3.980911
Epoch: 92 	Training Loss: 1.680521 	Validation Loss: 3.853935
Epoch: 93 	Training Loss: 1.652820 	Validation Loss: 3.834007
Epoch: 94 	Training Loss: 1.621606 	Validation Loss: 3.980728
Epoch: 95 	Training Loss: 1.622321 	Validation Loss: 4.019287
Epoch: 96 	Training Loss: 1.582989 	Validation Loss: 4.062647
Epoch: 97 	Training Loss: 1.587155 	Validation Loss: 4.026925
Epoch: 98 	Training Loss: 1.537643 	Validation Loss: 3.948397
Epoch: 99 	Training Loss: 1.530310 	Validation Loss: 3.984931
Epoch: 100 	Training Loss: 1.517246 	Validation Loss: 3.924273
Out[47]:
<All keys matched successfully>
In [48]:
# Visualize the training progress by plotting training and validation loss over the number of epochs
plt.plot(train_loss_progress_scratch, label='Training loss')
plt.plot(valid_loss_progress_scratch, label='Validation loss')
plt.title('Training and validation loss during training process (model_strach)')
plt.xlabel('Number of Epochs'), plt.ylabel('Loss')
plt.legend(frameon=False)
Out[48]:
<matplotlib.legend.Legend at 0x7fa03f0d1ba8>

(IMPLEMENTATION) Test the Model

Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 10%.

In [49]:
def test(loaders, model, criterion, use_cuda):

    # monitor test loss and accuracy
    test_loss = 0.
    correct = 0.
    total = 0.

    model.eval()
    for batch_idx, (data, target) in enumerate(loaders['test']):
        # move to GPU
        if use_cuda:
            data, target = data.cuda(), target.cuda()
        # forward pass: compute predicted outputs by passing inputs to the model
        output = model(data)
        # calculate the loss
        loss = criterion(output, target)
        # update average test loss 
        # test_loss = test_loss + ((1 / (batch_idx + 1)) * (loss.data - test_loss))
        test_loss += loss.item()*data.size(0)
        # convert output probabilities to predicted class
        pred = output.data.max(1, keepdim=True)[1]
        # compare predictions to true label
        correct += np.sum(np.squeeze(pred.eq(target.data.view_as(pred))).cpu().numpy())
        total += data.size(0)
            
    print('\nOverall test loss: {:.6f}\n'.format(test_loss))
    
    return test_loss, correct, total
In [50]:
# call test function    
test_loss, correct, total = test(loaders_scratch, model_scratch, criterion_scratch, use_cuda)

# Test accuracy
print('\nTest accuracy: %2d%% (%2d/%2d)\n' % (100. * correct / total, correct, total))
Overall test loss: 3122.777698


Test accuracy: 14% (123/836)


Step 4: Create a CNN to Classify Dog Breeds (using Transfer Learning)

You will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set.

(IMPLEMENTATION) Specify Data Loaders for the Dog Dataset

Use the code cell below to write three separate data loaders for the training, validation, and test datasets of dog images (located at dogImages/train, dogImages/valid, and dogImages/test, respectively).

If you like, you are welcome to use the same data loaders from the previous step, when you created a CNN from scratch.

In [51]:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import matplotlib.pyplot as plt

import os
import torch
import torch.nn as nn
from torch import optim
from torchvision import datasets, transforms, models
from torch.utils.data.sampler import SubsetRandomSampler
from collections import OrderedDict
import numpy as np
import time
#import cv2
In [52]:
## TODO: Specify data loaders
loaders_transfer = loaders_scratch
# print('Data loaders for transfer learning model: ', loaders_transfer)

(IMPLEMENTATION) Model Architecture

Use transfer learning to create a CNN to classify dog breed. Use the code cell below, and save your initialized model as the variable model_transfer.

In [53]:
# check if CUDA is available
use_cuda = torch.cuda.is_available()
In [54]:
# define ResNet50 model using pretrained weights
model_transfer = models.resnet50(pretrained=True)
# Inspect the modules of the pretrained model
print(model_transfer)
# list(model_transfer.modules())
ResNet(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (layer2): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (3): Bottleneck(
      (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (layer3): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (3): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (4): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (5): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (layer4): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (fc): Linear(in_features=2048, out_features=1000, bias=True)
)
In [55]:
# Freeze parameters of the base model so we don't backprop through them
for param in model_transfer.parameters():
    param.requires_grad = False

# Keep the convolutional part of the model as "Feature Extractor" and ...
# replace the final linear layer (classifier) with a new classifier that is trained / adapted to this problem
model_transfer.fc = nn.Linear(in_features=2048, out_features=133, bias=True)
print(model_transfer)
ResNet(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (layer2): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (3): Bottleneck(
      (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (layer3): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (3): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (4): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (5): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (layer4): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (fc): Linear(in_features=2048, out_features=133, bias=True)
)
In [56]:
# Move model to gpu if CUDA is available
if use_cuda:
    model_transfer = model_transfer.cuda()

Question 5: Outline the steps you took to get to your final CNN architecture and your reasoning at each step. Describe why you think the architecture is suitable for the current problem.

Answers to Question 5:

  • For human detection I just reuse the proposed haar cascade classifier from opencv. I have also tried out mtcnn from facenet_pytorch on my local machine but it needs a higher version of pytorch. It doesn't seem to be so much better than the haar cascade itself excapt for it also exports the landmarks for eyes, nose and mouth, which are not needed here.

  • The pre-trained torchvision models show very good performance. They perform ways better than a quickly hand-crafted classifier. They are also comparatively easy to implement resp. to adapt to e. g. the dog breed classification problem without any additional training effort in case they can be directly used. VGG16 for instance is already capable to classify 119 dog breed classes from image net. So I want to go first for a pre-trained model if I can. ResNet50 seems to slightly outperform VGG16 on the test set. So I would prefere this one.

  • For dogbreed detection I have chosen the ResNet50 model, which is adapted to dog breed detection via transfer learning considering a higher number of dog breeds (133 dog breeds) in the data set of this course compared to the pre-defined dog breeds covered by ImageNet (119 dog breeds).

  • When applying transfer learning I make use the feature detection capabilities of base network's (e. g. ResNet50) pre-trained feature detector with convolutional layers (parameters are frozen resp. gradients are switched off) and replace the classifier head with a new one. In this case I have chosen a new small classifier head with two fully connected layers, which is then trained on the 133 dog breed classes of this course.

  • As we are searching for a unique classification we need to apply some kind of softmax or argmax function on the transfer learning CNN's new output with 133 classes to pick the most likely one.

  • For training, validation and testing nn.CrossEntropy() is chosen as optimization criterion. A softmax function is already included in the nn.CrossEntropy() criterion, which combines both nn.LogSoftmax and nn.NLLLoss function. So the classifier head needs to output the class scores, wich are then directly feed to the criterion during optimization.

  • For prediction we need to add the softmax or another argmax function if it is only to decide which is the most likely class prediction.

  • Due to slow computation in workspace and on my local machine I let the backpropagation only run a few epochs fare before I see any overfitting.

(IMPLEMENTATION) Specify Loss Function and Optimizer

Use the next code cell to specify a loss function and optimizer. Save the chosen loss function as criterion_transfer, and the optimizer as optimizer_transfer below.

In [57]:
# Define loss function
criterion_transfer = nn.CrossEntropyLoss()

# Define optimization method
# optimizer_transfer = optim.SGD(model.fc.parameters(), lr=0.001)
optimizer_transfer = optim.Adam(model_transfer.fc.parameters(), lr=0.001)

(IMPLEMENTATION) Train and Validate the Model

Train and validate your model in the code cell below. Save the final model parameters at filepath 'model_transfer.pt'.

In [58]:
# train and validate the transfer learning model
# model_transfer = train(n_epochs, loaders_transfer, model_transfer, optimizer_transfer, criterion_transfer, use_cuda, 'model_transfer.pt')
model_transfer, train_loss_progress_transfer, valid_loss_progress_transfer = train(15,
                                                                                   loaders_transfer,
                                                                                   model_transfer,
                                                                                   optimizer_transfer,
                                                                                   criterion_transfer,
                                                                                   use_cuda,
                                                                                   'model_transfer.pt')

# load the model that got the best validation accuracy (uncomment the line below)
model_transfer.load_state_dict(torch.load('model_transfer.pt'))
Epoch: 1 	Training Loss: 2.572223 	Validation Loss: 1.318518
Validation loss decreased (inf --> 1.318518).  Saving model ...
Epoch: 2 	Training Loss: 1.069146 	Validation Loss: 0.987102
Validation loss decreased (1.318518 --> 0.987102).  Saving model ...
Epoch: 3 	Training Loss: 0.826666 	Validation Loss: 0.875864
Validation loss decreased (0.987102 --> 0.875864).  Saving model ...
Epoch: 4 	Training Loss: 0.729854 	Validation Loss: 0.805370
Validation loss decreased (0.875864 --> 0.805370).  Saving model ...
Epoch: 5 	Training Loss: 0.618849 	Validation Loss: 0.785993
Validation loss decreased (0.805370 --> 0.785993).  Saving model ...
Epoch: 6 	Training Loss: 0.582611 	Validation Loss: 0.819104
Epoch: 7 	Training Loss: 0.542945 	Validation Loss: 0.789029
Epoch: 8 	Training Loss: 0.496252 	Validation Loss: 0.808528
Epoch: 9 	Training Loss: 0.485606 	Validation Loss: 0.802408
Epoch: 10 	Training Loss: 0.461163 	Validation Loss: 0.813395
Epoch: 11 	Training Loss: 0.430234 	Validation Loss: 0.798611
Epoch: 12 	Training Loss: 0.430357 	Validation Loss: 0.771141
Validation loss decreased (0.785993 --> 0.771141).  Saving model ...
Epoch: 13 	Training Loss: 0.403627 	Validation Loss: 0.826807
Epoch: 14 	Training Loss: 0.385528 	Validation Loss: 0.834524
Epoch: 15 	Training Loss: 0.355122 	Validation Loss: 0.837122
Out[58]:
<All keys matched successfully>
In [59]:
# Visualize the training progress by plotting training and validation loss over the number of epochs
plt.plot(train_loss_progress_transfer, label='Training loss')
plt.plot(valid_loss_progress_transfer, label='Validation loss')
plt.title('Training and validation loss during training process (transfer learning model)')
plt.xlabel('Number of Epochs'), plt.ylabel('Loss')
plt.legend(frameon=False)
plt.show()

(IMPLEMENTATION) Test the Model

Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 60%.

In [60]:
# test the transfer learning model
test(loaders_transfer, model_transfer, criterion_transfer, use_cuda)
Overall test loss: 707.084709

Out[60]:
(707.0847091674805, 621.0, 836.0)

(IMPLEMENTATION) Predict Dog Breed with the Model

Write a function that takes an image path as input and returns the dog breed (Affenpinscher, Afghan hound, etc) that is predicted by your model.

In [69]:
# list of class names by index, i.e. a name can be accessed like class_names[0]
# class_names = [item[4:].replace("_", " ") for item in data_transfer['train'].classes]
dog_breed_class_names = [item[4:].replace("_", " ") for item in image_datasets['train'].classes]
#print(dog_breed_class_names)
In [62]:
### TODO: Write a function that takes a path to an image as input
### and returns the dog breed that is predicted by the model.

def predict_breed_transfer(img_path):
    # load the image and return the predicted dog breed
    
    # Set up image transformations expected by the model            
    predict_transforms = transforms.Compose([transforms.Resize(size=336),
                                             transforms.CenterCrop(size=224),
                                             transforms.ToTensor(),
                                             transforms.Normalize([0.485, 0.456, 0.406],
                                                                  [0.229, 0.224, 0.225])])
    
    # Open input image using PIL / pillow
    input_image = Image.open(img_path)
    
    # Transform input image to input tensor
    input_tensor = predict_transforms(input_image)
    
    # Reshape input tensor and move to cuda device
    if use_cuda:
        input_tensor = input_tensor.view(1, 3, 224, 224).cuda()
    else:
        input_tensor = input_tensor.view(1, 3, 224, 224)
    
    # Set model to evaluation mode
    model_transfer.eval()
    
    # Switch off gradients for forward prediction step
    with torch.no_grad():
        
        # Move input images to the default device
        if use_cuda:
            input_tensor = input_tensor.cuda()
        
        # Get probability scores from the model output
        prob_scores = model_transfer.forward(input_tensor)
        
        # Get the top candicate
        topk, topclass = prob_scores.topk(1, dim=1)
        
        # Move topclass tensor to cpu and convert to integer using numpy()
        topclass_idx = int(topclass.cpu().numpy().squeeze())
        
    # Get dog breed label from dog_breed_class_names dictionary
    dog_breed_label = dog_breed_class_names[topclass_idx]
    
    # Return dog breed class label
    return dog_breed_label
In [63]:
def show_image(img_path, title="no title"):
    img = Image.open(img_path)
    plt.title(title)
    plt.imshow(img)
    plt.show()
In [64]:
import random

# Test the dog breed detector with random images from human files
for img_path in random.sample(list(human_files), 5): 
    predicted_breed = predict_breed_transfer(img_path)
    show_image(img_path, title=f"Predicted:{predicted_breed}")
    
# Test the dog breed detector with random images from dog files
for img_path in random.sample(list(dog_files), 5): 
    predicted_breed = predict_breed_transfer(img_path)
    show_image(img_path, title=f"Predicted:{predicted_breed}")

Step 5: Write your Algorithm

Write an algorithm that accepts a file path to an image and first determines whether the image contains a human, dog, or neither. Then,

  • if a dog is detected in the image, return the predicted breed.
  • if a human is detected in the image, return the resembling dog breed.
  • if neither is detected in the image, provide output that indicates an error.

You are welcome to write your own functions for detecting humans and dogs in images, but feel free to use the face_detector and dog_detector functions developed above. You are required to use your CNN from Step 4 to predict dog breed.

Some sample output for our algorithm is provided below, but feel free to design your own user experience!

Sample Human Output

(IMPLEMENTATION) Write your Algorithm

In [65]:
### TODO: Write your algorithm.
### Feel free to use as many code cells as needed.

def run_app(img_path):
    ## handle cases for a human face, dog, and neither
    
    # check if human faces are detected in the given image using opencv haar cascade classifier
    face_bounding_boxes, faces_per_image = face_bb_detector(img_path, debug_mode=False)
    
    # check if dogs are detected in the given image using VGG16 detector
    dog_prediction, _, _ = ResNet50_dog_detector(img_path, debug_mode=False)
    
    # Get dog breed using ResNet50 trained on 133 classes
    dog_breed_label = predict_breed_transfer(img_path)
       
    if faces_per_image == 0:
        if dog_prediction:
            # Show image
            show_image(img_path, title='This seems to be a dog')
            # Display prediction
            print('The face bounding box detector says there is no human face in the image.')
            print('The dog detector says there is a dog (breed: ', dog_breed_label, ') in the image.')
        else:
            # Show image
            show_image(img_path, title='This seems to be neither human nor dog')
            # Display prediction
            print('The face bounding box detector says there is no human face in the image.')
            print('The dog detector says there is no dog in the image.')
    elif faces_per_image == 1:
        if dog_prediction:
            # Show image
            show_image(img_path, title='This seems to be a human with a dog')
            # Display prediction
            print('The face bounding box detector has detected 1 human face in the image.')
            print('The dog detector says there is a dog (breed: ', dog_breed_label, ') in the image.')
        else:
            # Show image
            show_image(img_path, title='This seems to be a human')
            # Display prediction
            print('The face bounding box detector has detected 1 human face in the image.')
            print('The dog detector says there is no dog in the image.')        
    else:
        if dog_prediction:
            # Show image
            show_image(img_path, title='This seems to be some humans with a dog')
            # Display prediction
            print('The face bounding box detector hase detected ', faces_per_image, ' human faces in the image.')
            print('The dog detector says there is a dog (breed: ', dog_breed_label, ') in the image.')
        else:
            # Show image
            show_image(img_path, title='This seems to be some humans')
            # Display prediction
            print('The face bounding box detector hase detected ', faces_per_image, ' human faces in the image.')
            print('The dog detector says there is no dog in the image.')
    print('\n\n\n')

Step 6: Test Your Algorithm

In this section, you will take your new algorithm for a spin! What kind of dog does the algorithm think that you look like? If you have a dog, does it predict your dog's breed accurately? If you have a cat, does it mistakenly think that your cat is a dog?

(IMPLEMENTATION) Test Your Algorithm on Sample Images!

Test your algorithm at least six images on your computer. Feel free to use any images you like. Use at least two human and two dog images.

Question 6: Is the output better than you expected :) ? Or worse :( ? Provide at least three possible points of improvement for your algorithm.

Answers to Question 6: (Three possible points for improvement)

  • Analyse and compare further models (evtl. try different classifier head architectures attached to the base network for transfer learning)
  • Enlarge and enrich training data set with further human and dog images
  • Longer model training (pick the point before overfitting starts)
  • Hyperparameter tuning
  • Introduce classifier ensembles for both human and dog detection using majority vote
  • Code optimization
In [66]:
## TODO: Execute your algorithm from Step 6 on
## at least 6 images on your computer.
## Feel free to use as many code cells as needed.

## suggested code, below
for file in np.hstack((human_files[:3], dog_files[:3])):
    run_app(file)
The face bounding box detector has detected 1 human face in the image.
The dog detector says there is no dog in the image.




The face bounding box detector has detected 1 human face in the image.
The dog detector says there is no dog in the image.




The face bounding box detector has detected 1 human face in the image.
The dog detector says there is no dog in the image.




The face bounding box detector says there is no human face in the image.
The dog detector says there is a dog (breed:  Border terrier ) in the image.




The face bounding box detector says there is no human face in the image.
The dog detector says there is a dog (breed:  Border terrier ) in the image.




The face bounding box detector says there is no human face in the image.
The dog detector says there is a dog (breed:  Border terrier ) in the image.




In [67]:
additional_test_images = ('dog_test_image_01.jpg',
                          'dog_test_image_02.jpg',
                          'dog_test_image_03.jpg',
                          'dog_test_image_04.jpg',
                          'dog_test_image_05.jpg',
                          'ape_test_image_01.jpg',
                          'human_test_image_01.jpg',
                          'human_test_image_02.jpg',
                          'human_test_image_03.jpg',
                          'human_test_image_04.jpg',
                          'human_test_image_05.jpg',
                         )
In [68]:
for file in additional_test_images:
    run_app(file)
The face bounding box detector says there is no human face in the image.
The dog detector says there is a dog (breed:  Bloodhound ) in the image.




The face bounding box detector says there is no human face in the image.
The dog detector says there is a dog (breed:  Pomeranian ) in the image.




The face bounding box detector says there is no human face in the image.
The dog detector says there is a dog (breed:  Yorkshire terrier ) in the image.




The face bounding box detector says there is no human face in the image.
The dog detector says there is no dog in the image.




The face bounding box detector says there is no human face in the image.
The dog detector says there is a dog (breed:  German shepherd dog ) in the image.




The face bounding box detector says there is no human face in the image.
The dog detector says there is no dog in the image.




The face bounding box detector has detected 1 human face in the image.
The dog detector says there is no dog in the image.




The face bounding box detector has detected 1 human face in the image.
The dog detector says there is no dog in the image.




The face bounding box detector has detected 1 human face in the image.
The dog detector says there is no dog in the image.




The face bounding box detector has detected 1 human face in the image.
The dog detector says there is no dog in the image.




The face bounding box detector has detected 1 human face in the image.
The dog detector says there is a dog (breed:  Cairn terrier ) in the image.




In [ ]: